Scalable Multitask Representation Learning for Scene Classification: Supplementary Material
1. Implementation Details

In this section we discuss certain implementation details of our STL-SDCA and MTL-SDCA solvers. We begin with some notation and then proceed with the technical details of each solver.

Notation: Let {(x_i, y_it) : 1 ≤ i ≤ n, 1 ≤ t ≤ T} be the input/output pairs of the multitask learning problem, where x_i ∈ R^d, y_it ∈ {±1}, T is the number of tasks, and n is the number of training examples per task. We assume that all tasks share the same training examples, although this can easily be generalized. The standard single-task learning (STL) approach learns linear predictors w_t in the original space R^d. In contrast, the proposed multitask learning (MTL) method learns a matrix U ∈ R^{d×k}, which is used to map the original features x_i to a new representation z_i via z_i = U^⊤ x_i. The linear predictors w_t are then learned in the subspace R^k. Let X ∈ R^{d×n} be the matrix of stacked vectors x_i, Z ∈ R^{k×n} the matrix of stacked vectors z_i, Y ∈ {±1}^{n×T} the matrix of labels, and W ∈ R^{·×T} the matrix of stacked predictors w_t (the dimensionality of w_t will be clear from the context). We define the following kernel matrices: K = K_X = X^⊤X, K_Z = Z^⊤Z, and M = K_W = W^⊤W. As mentioned in the paper, both solvers use precomputed kernel matrices and work with dual variables α_t ∈ R^n. We define A ∈ R^{n×T} as the matrix of stacked dual variables for all tasks.

STL-SDCA: The STL optimization problem for a task t is defined as follows:
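For concreteness, a per-task SDCA solver of the kind described above can be sketched as plain stochastic dual coordinate ascent. This is a minimal generic sketch under assumed settings (hinge loss, regularization parameter `lam`, primal features rather than a precomputed kernel, random coordinate order); it is not necessarily the exact loss or variant used by the paper's STL-SDCA solver.

```python
import numpy as np

def sdca_hinge(X, y, lam=0.1, epochs=50, seed=0):
    """Generic SDCA for one task with hinge loss (a sketch, not the
    paper's exact STL-SDCA variant).

    X : (d, n) array, columns are the examples x_i.
    y : (n,) array of labels in {+1, -1}.
    Maintains dual variables alpha in [0, 1]^n and the primal
    predictor w = (1 / (lam * n)) * sum_i alpha_i * y_i * x_i.
    """
    d, n = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            xi, yi = X[:, i], y[i]
            # Closed-form maximizer of the dual along coordinate i,
            # clipped to the box constraint alpha_i in [0, 1].
            a = alpha[i] + (1.0 - yi * (w @ xi)) * lam * n / (xi @ xi)
            a = min(1.0, max(0.0, a))
            # Keep the primal w consistent with the updated dual.
            w += (a - alpha[i]) * yi * xi / (lam * n)
            alpha[i] = a
    return w, alpha
```

In the kernelized setting the paper describes (precomputed K = X^⊤X), the same coordinate update can be expressed entirely through inner products K_ij, without storing w explicitly.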
Publication date: 2014